5 research outputs found

    A Study on the Impact of Locality in the Decoding of Binary Cyclic Codes

    In this paper, we study the impact of locality on the decoding of binary cyclic codes under two approaches, namely ordered statistics decoding (OSD) and trellis decoding. Given a binary cyclic code having locality or availability, we suitably modify the OSD to obtain gains in terms of the signal-to-noise ratio (SNR), for a given reliability and essentially the same level of decoder complexity. With regard to trellis decoding, we show that careful introduction of locality results in the creation of cyclic subcodes having lower maximum state complexity. We also present a simple upper-bounding technique on the state complexity profile, based on the zeros of the code. Finally, it is shown how the decoding speed can be significantly increased in the presence of locality, in the moderate-to-high SNR regime, by making use of a quick-look decoder that often returns the ML codeword. Comment: Extended version of a paper submitted to ISIT 201
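    The OSD procedure the abstract modifies can be sketched as follows. This is a plain (non-locality-aware) order-1 OSD for a small binary linear code, written as a minimal illustration; the generator matrix, the BPSK sign convention (0 → +1, 1 → −1), and the use of correlation as the ML metric are standard, but none of the paper's modifications are reproduced here.

    ```python
    import numpy as np
    from itertools import combinations

    def osd_decode(G, y, order=1):
        """Ordered statistics decoding (OSD) of a binary linear code with
        k x n generator matrix G, from a BPSK/AWGN observation y.
        A generic sketch; the paper's locality-aware variant differs."""
        k, n = G.shape
        # 1. Order positions from most to least reliable (largest |y_i| first).
        perm = np.argsort(-np.abs(y))
        Gp = (G[:, perm] % 2).astype(int)
        # 2. Row-reduce to find the k most reliable independent positions.
        pivots, row = [], 0
        for col in range(n):
            if row == k:
                break
            nz = np.nonzero(Gp[row:, col])[0]
            if nz.size == 0:
                continue
            Gp[[row, row + nz[0]]] = Gp[[row + nz[0], row]]
            for r in range(k):
                if r != row and Gp[r, col]:
                    Gp[r] ^= Gp[row]
            pivots.append(col)
            row += 1
        # 3. Hard-decide the information set, then re-encode every flip
        #    pattern of weight up to `order` on those reliable positions.
        hard = (y[perm] < 0).astype(int)
        info = hard[pivots]
        patterns = [()] + [p for w in range(1, order + 1)
                           for p in combinations(range(k), w)]
        best_cw, best_corr = None, -np.inf
        for pat in patterns:
            u = info.copy()
            u[list(pat)] ^= 1
            cw = (u @ Gp) % 2
            corr = np.dot(1 - 2 * cw, y[perm])  # ML metric on AWGN
            if corr > best_corr:
                best_cw, best_corr = cw, corr
        # 4. Undo the reliability permutation.
        out = np.empty(n, dtype=int)
        out[perm] = best_cw
        return out
    ```

    Note the structure the paper exploits: the expensive part is the re-encoding loop over flip patterns, while the reordering and row reduction are fixed per received word, so any side information (such as locality) that shrinks the candidate list translates directly into complexity or SNR gains.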

    Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective

    State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation. In this paper, we propose a complementary approach motivated by communication theory, aimed at enhancing the signal-to-noise ratio at the output of a neural network layer via neural competition during learning and inference. In addition to minimization of a standard end-to-end cost, neurons compete to sparsely represent layer inputs by maximization of a tilted exponential (TEXP) objective function for the layer. TEXP learning can be interpreted as maximum likelihood estimation of matched filters under a Gaussian model for data noise. Inference in a TEXP layer is accomplished by replacing batch norm by a tilted softmax, which can be interpreted as computation of posterior probabilities for the competing signaling hypotheses represented by each neuron. After providing insights via simplified models, we show, by experimentation on standard image datasets, that TEXP learning and inference enhance robustness against noise and other common corruptions, without requiring data augmentation. Further cumulative gains in robustness against this array of distortions can be obtained by appropriately combining TEXP with data augmentation techniques.
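    The tilted-softmax inference step described above can be sketched in a few lines. This is a minimal interpretation, not the paper's exact layer: the parameter name `tilt` and the plain scaling of the activations are assumptions, and the outputs are read as posterior probabilities over the competing neurons.

    ```python
    import numpy as np

    def tilted_softmax(z, tilt=2.0):
        """Tilted softmax over competing neurons (last axis): a sketch of
        TEXP inference, where each output is read as a posterior probability
        for the signaling hypothesis represented by one neuron."""
        a = tilt * z
        a = a - a.max(axis=-1, keepdims=True)   # stabilize the exponentials
        e = np.exp(a)
        return e / e.sum(axis=-1, keepdims=True)
    ```

    Increasing the tilt sharpens the competition, concentrating posterior mass on the neuron whose matched filter best fits the input, which is the mechanism the paper links to improved signal-to-noise ratio at the layer output.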

    Exploiting Locality for Improved Decoding of Binary Cyclic Codes

    In this paper, we show how the presence of locality within a binary cyclic code can be exploited to improve decoding performance and to reduce decoding complexity. We pursue two approaches. Under the first approach, we show how the ordered statistics decoding (OSD) method can be modified by inserting a simple single-round belief-propagation step at the start that involves only the local codes. The resultant locality-aware OSD algorithm yields an appreciable signal-to-noise ratio (SNR) gain for a given level of reliability and essentially the same level of decoder complexity. Under the second, trellis decoding approach, we show that the careful introduction of locality results in the creation of a cyclic subcode that possesses lower maximum state complexity. In addition, we present a simple means of deriving an upper bound on the state complexity profile of any cyclic code that is based only on the zeros of the code. Furthermore, we show how the decoding speed of either locality-aware OSD or trellis decoding can be significantly increased in the presence of locality, in the moderate-to-high SNR regime, by making use of a quick-look decoder that often returns the maximum likelihood codeword.
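    The quick-look idea at the end of the abstract can be sketched generically: hard-decide the channel outputs, and if the resulting word already satisfies all parity checks, return it without running the expensive full decoder. This is a hedged illustration of the general principle; the paper's quick-look decoder for codes with locality may differ in detail.

    ```python
    import numpy as np

    def quick_look_decode(H, y, full_decoder):
        """Quick-look front end: check the syndrome of the hard decisions
        against the parity-check matrix H.  A zero syndrome means the hard
        word is already a codeword, which at moderate-to-high SNR is very
        likely the ML codeword, so it is returned immediately."""
        hard = (y < 0).astype(int)      # BPSK: 0 -> +1, 1 -> -1
        if not ((H @ hard) % 2).any():  # zero syndrome: already a codeword
            return hard
        return full_decoder(y)          # fall back to OSD / trellis decoding
    ```

    At moderate-to-high SNR the first branch fires on most received words, which is exactly why the quick look yields a large average-speed gain: the full decoder runs only on the rare noisy words that fail the syndrome check.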
